Admissible, consistent multiple testing with applications including variable selection
Authors
Abstract
Similar Resources
Multiple Testing and the Variable-Selection Problem
We study the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Specifically, we compare empirical-Bayes (EB) and fully Bayesian (FB) approaches for handling the prior inclusion probability p required by these priors. Several new information-theoretic results, along with extensive computer experiments, lead us to conclude that the empirical-Bayes...
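A minimal sketch of the contrast this abstract describes, under assumptions that are not claimed to match the paper's setup (a BIC approximation to the marginal likelihood and full enumeration of a small model space): the empirical-Bayes route plugs in the prior inclusion probability that maximizes the evidence, while the fully Bayesian route integrates it out under a uniform prior.

```python
# Hedged sketch: empirical-Bayes (EB) vs. fully Bayesian (FB) handling of the
# prior inclusion probability in Bayesian variable selection. Marginal
# likelihoods use a BIC approximation (an assumption of this sketch only).
import itertools
from math import comb, log
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])   # only two truly active predictors
y = X @ beta + rng.standard_normal(n)

def log_marginal(gamma):
    """BIC-style approximation to log m(y | model gamma)."""
    k = int(gamma.sum())
    Xg = np.column_stack([np.ones(n)] + [X[:, j] for j in range(p) if gamma[j]])
    resid = y - Xg @ np.linalg.lstsq(Xg, y, rcond=None)[0]
    return -0.5 * (n * np.log(resid @ resid / n) + (k + 1) * np.log(n))

models = [np.array(g) for g in itertools.product([0, 1], repeat=p)]
logm = np.array([log_marginal(g) for g in models])
sizes = np.array([int(g.sum()) for g in models])

# EB: plug in the inclusion probability q that maximizes the evidence.
def log_evidence(q):
    return np.logaddexp.reduce(sizes * np.log(q) + (p - sizes) * np.log(1 - q) + logm)

grid = np.linspace(0.01, 0.99, 99)
q_hat = grid[np.argmax([log_evidence(q) for q in grid])]

# FB: integrate q out under Uniform(0,1), giving the beta-binomial prior
# P(gamma) = 1 / ((p + 1) * C(p, |gamma|)).
logprior_fb = np.array([-log((p + 1) * comb(p, k)) for k in sizes])

def inclusion_probs(logprior):
    w = np.exp(logprior + logm - np.max(logprior + logm))
    w /= w.sum()
    return np.array([sum(w[i] for i, g in enumerate(models) if g[j]) for j in range(p)])

print("EB q_hat:", q_hat)
print("EB inclusion probabilities:",
      inclusion_probs(sizes * np.log(q_hat) + (p - sizes) * np.log(1 - q_hat)).round(3))
print("FB inclusion probabilities:", inclusion_probs(logprior_fb).round(3))
```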
Multiple Testing, Empirical Bayes, and the Variable-Selection Problem
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. The first goal of the paper is to clarify when, and how, multiplicity correction is automatic in Bayesian analysis, and contrast this multiplicity correction with the Bayesian Ockham’s-razor effect. Secondly, we contrast empirical-Bayes and fully Bayesian approaches to varia...
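As a hedged illustration of the automatic multiplicity correction this abstract mentions (the exact penalty form here is an assumption of the illustration, obtained by integrating a uniform prior on the inclusion probability), the snippet below shows how the prior odds of admitting one more variable shrink as the number of candidate variables grows.

```python
# Prior odds of enlarging the model by one variable under the beta-binomial
# prior P(gamma) = 1 / ((p + 1) * C(p, |gamma|)); equals (k + 1) / (p - k).
from math import comb

def prior_odds_of_adding(k, num_vars):
    """P(specific size-(k+1) model) / P(specific size-k model)."""
    return comb(num_vars, k) / comb(num_vars, k + 1)

for num_vars in (10, 100, 1000):
    print(num_vars, round(prior_odds_of_adding(1, num_vars), 4))
# The printed odds shrink roughly like 1 / num_vars (0.2222, 0.0202, 0.002):
# the prior penalizes inclusion more heavily as more candidates are tested,
# whereas a fixed inclusion probability of 1/2 gives constant odds of 1.
```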
Regularizing Lasso: a Consistent Variable Selection Method
Table 1 provides the average computational time (in minutes) for the eight methods under the simulation settings. SIS clearly requires the least computational effort, whereas RLASSO and Scout require much longer computation times. Still, all methods except RLASSO(CLIME) can be computed within a reasonable amount of time for p = 5000 and n = 100. RLASSO(CLIME) takes much longer because of in...
Self-consistent multiple testing procedures
We study the control of the false discovery rate (FDR) for a general class of multiple testing procedures. We introduce a general condition, called “self-consistency”, on the set of hypotheses rejected by the procedure, which we show is sufficient to ensure the control of the corresponding false discovery rate under various conditions on the distribution of the p-values. Maximizing the size of t...
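To illustrate the self-consistency condition described here, i.e. a rejection set R satisfying R ⊆ {i : p_i ≤ Δ(|R|)} for a threshold family Δ, the sketch below uses the linear thresholds Δ(r) = αr/m and implements the Benjamini-Hochberg step-up procedure, which returns the largest rejection set satisfying that condition; the helper names are illustrative, not from the paper.

```python
# Hedged sketch: Benjamini-Hochberg as a self-consistent procedure with
# threshold family Delta(r) = alpha * r / m.
import numpy as np

def bh_reject(pvals, alpha=0.05):
    """Return the indices rejected by the BH step-up procedure at level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0]) + 1      # largest rank meeting the bound
    return order[:k]

def is_self_consistent(pvals, rejected, alpha=0.05):
    """Check R ⊆ {i : p_i <= alpha * |R| / m}."""
    p = np.asarray(pvals)
    r = len(rejected)
    return bool(np.all(p[list(rejected)] <= alpha * r / p.size)) if r else True

rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 95)])
R = bh_reject(pvals, alpha=0.05)
print("rejections:", len(R), "self-consistent:", is_self_consistent(pvals, R))
```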
Consistent selection of tuning parameters via variable selection stability
Penalized regression models are popularly used in high-dimensional data analysis to conduct variable selection and model fitting simultaneously. While their success has been widely reported in the literature, their performance largely depends on the tuning parameters that balance the trade-off between model fitting and model sparsity. Existing tuning criteria mainly follow the route of minimizing the e...
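A hedged sketch of the selection-stability idea this abstract describes, with illustrative choices (half-sample splits, the lasso as selector, Cohen's kappa as the agreement measure) that may differ from the paper's exact recipe: the tuning parameter is chosen to maximize the average agreement between the variable sets selected on independent halves of the data.

```python
# Tuning by variable-selection stability: for each candidate lambda, split the
# data in half repeatedly, select variables on both halves, and score the
# agreement of the two selected sets; pick the most stable lambda.
import numpy as np
from sklearn.linear_model import Lasso

def selected_set(X, y, lam):
    return Lasso(alpha=lam, max_iter=10000).fit(X, y).coef_ != 0

def kappa(sel_a, sel_b):
    """Cohen's kappa between two binary selection indicators."""
    agree = np.mean(sel_a == sel_b)
    pa, pb = sel_a.mean(), sel_b.mean()
    chance = pa * pb + (1 - pa) * (1 - pb)
    return 0.0 if chance == 1.0 else (agree - chance) / (1 - chance)

def stability_path(X, y, lambdas, n_splits=20, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    scores = np.zeros(len(lambdas))
    for _ in range(n_splits):
        idx = rng.permutation(n)
        a, b = idx[: n // 2], idx[n // 2:]
        for j, lam in enumerate(lambdas):
            scores[j] += kappa(selected_set(X[a], y[a], lam),
                               selected_set(X[b], y[b], lam))
    return scores / n_splits

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 30))
y = X[:, :3] @ np.array([3.0, -2.0, 1.5]) + rng.standard_normal(200)
lambdas = np.logspace(-2, 0.5, 15)
stab = stability_path(X, y, lambdas)
print("lambda chosen by stability:", lambdas[np.argmax(stab)])
```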
Journal
Journal title: Electronic Journal of Statistics
Year: 2009
ISSN: 1935-7524
DOI: 10.1214/09-ejs391